discrete undirected graphical model
Adversarially-learned Inference via an Ensemble of Discrete Undirected Graphical Models
Undirected graphical models are compact representations of joint probability distributions over random variables. To solve inference tasks of interest, graphical models of arbitrary topology can be trained using empirical risk minimization. However, to solve inference tasks that were not seen during training, these models (EGMs) often need to be re-trained. Instead, we propose an inference-agnostic adversarial training framework that produces an infinitely large ensemble of graphical models (AGMs). The ensemble is optimized to generate data within the GAN framework, and inference is performed using a finite subset of these models. AGMs perform comparably to EGMs on inference tasks that the latter were specifically optimized for. Most importantly, AGMs show significantly better generalization to unseen inference tasks than EGMs, as well as deep neural architectures such as GibbsNet and VAEAC that allow arbitrary conditioning. Finally, AGMs allow fast data sampling, competitive with Gibbs sampling from EGMs.
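To make the inference scheme the abstract describes more concrete, here is a minimal sketch (not the authors' code): a generator maps a latent code z to the potentials of a small binary chain MRF, and an arbitrary conditional query is answered by averaging exact conditional marginals over a finite ensemble of sampled models. The generator below is an untrained random affine map standing in for the GAN-trained network, and all sizes and names (N_VARS, generator, conditional_marginal) are illustrative assumptions.

```python
# Sketch of ensemble inference with generated undirected graphical models.
# The "generator" is a stand-in random affine map, not a trained GAN generator.
import itertools
import numpy as np

N_VARS, Z_DIM, N_MODELS = 4, 8, 16               # binary chain Y1-Y2-Y3-Y4
rng = np.random.default_rng(0)
W = rng.normal(size=(Z_DIM, N_VARS - 1, 2, 2))   # stand-in generator weights

def generator(z):
    """Map a latent code z to pairwise log-potentials of the chain MRF."""
    return np.tensordot(z, W, axes=1)            # shape (N_VARS - 1, 2, 2)

def joint(log_pot):
    """Exact joint distribution by enumeration (fine for 4 binary variables)."""
    states = np.array(list(itertools.product([0, 1], repeat=N_VARS)))
    scores = np.array([sum(log_pot[i, y[i], y[i + 1]] for i in range(N_VARS - 1))
                       for y in states])
    p = np.exp(scores - scores.max())
    return states, p / p.sum()

def conditional_marginal(log_pot, query, evidence):
    """p(Y_query = 1 | evidence) for a single ensemble member."""
    states, p = joint(log_pot)
    mask = np.all([states[:, i] == v for i, v in evidence.items()], axis=0)
    p = p * mask
    return p[states[:, query] == 1].sum() / p.sum()

# Ensemble inference: average the answer to an arbitrary query over K models,
# mirroring "inference is performed using a finite subset of these models".
zs = rng.normal(size=(N_MODELS, Z_DIM))
answer = np.mean([conditional_marginal(generator(z), query=2, evidence={0: 1})
                  for z in zs])
print(f"p(Y3 = 1 | Y1 = 1) under the ensemble: {answer:.3f}")
```

Because each ensemble member is a small discrete model, any conditioning pattern can be handled exactly per member; the averaging over sampled z's is what makes the query-time task independent of how the ensemble was trained.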
Review for NeurIPS paper: Adversarially-learned Inference via an Ensemble of Discrete Undirected Graphical Models
Weaknesses: I'm a bit confused about the evaluation of the approach. What is learned is a generative model over probabilistic graphical models; however, the focus in experiments I and II is on conditional MAP inference. In this setting, the model is being used for structured output prediction, so comparisons against other structured prediction models, such as [1], [2], and [3] (see the "related work" section for refs), are missing. If the primary use of this model is conditional MAP inference, then it is important to understand how well AGM compares against such models. That said, since the samples themselves are unconditional, this approach is at a disadvantage compared to these other approaches, which condition their "samples" on the input.
Review for NeurIPS paper: Adversarially-learned Inference via an Ensemble of Discrete Undirected Graphical Models
This paper presents a novel method for using undirected graphical models to perform inference on arbitrarily chosen subsets of random variables. Initial reviews all identified this as a novel and significant idea, but also raised several issues, mostly pertaining to the experimental validation. After author response and discussion, the reviewers feel these concerns were sufficiently addressed to recommend accepting this paper.
Learning Higher-Order Graph Structure with Features by Structure Penalty
Ding, Shilin; Wahba, Grace; Zhu, Jerry
In discrete undirected graphical models, the conditional independence of node labels Y is specified by the graph structure. We study the case where there is another input random vector X. The main contribution of this paper is to learn the graph structure and the functions conditioned on X at the same time. We prove that discrete undirected graphical models with feature X are equivalent to multivariate discrete models. The reparameterization of the potential functions in graphical models by conditional log odds ratios of the latter offers advantages in the representation of the conditional independence structure.
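As a two-variable illustration of why the log odds ratio parameterization exposes the conditional independence structure (our example, not the paper's general formulation): for binary labels Y1 and Y2, the conditional log odds ratio is

\[
\eta_{12}(x) \;=\; \log \frac{P(Y_1{=}1, Y_2{=}1 \mid X{=}x)\, P(Y_1{=}0, Y_2{=}0 \mid X{=}x)}{P(Y_1{=}1, Y_2{=}0 \mid X{=}x)\, P(Y_1{=}0, Y_2{=}1 \mid X{=}x)},
\]

and \(\eta_{12}(x) \equiv 0\) exactly when Y1 and Y2 are conditionally independent given X = x, so a vanishing interaction term corresponds directly to a missing edge; the paper's higher-order parameterization extends this idea beyond pairwise interactions.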